We consider memory systems to be a key component of any technical cognitive system, able to bridge the gap between the high-level, symbolic, discrete representations used for reasoning, planning, and semantic scene understanding and the low-level representations used for control. In this work, we describe the conceptual and technical characteristics that such a memory system must fulfill, together with the underlying data representation. We identify these characteristics based on the experience gained in developing our ARMAR humanoid robot systems, and discuss practical examples demonstrating what the memory system of a humanoid robot performing tasks in human-centered environments should support, such as multi-modality, introspectability, hetero-associativity, predictability, or an inherent episodic structure. Based on these characteristics, we extended our robot software framework ArmarX into a unified cognitive architecture that is used in robots of the ARMAR humanoid robot family. Furthermore, we describe how the development of robot software led us to this novel memory-enabled cognitive architecture, and show how robots use the memory to implement memory-driven behaviors.
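To make these characteristics concrete, the following is a minimal, purely illustrative sketch of what a multi-modal, episodic memory entry and a hetero-associative query could look like; the class and method names are hypothetical and do not correspond to the ArmarX API.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Hypothetical sketch of a multi-modal, episodic memory entry of the kind the
# abstract describes; names do not correspond to the actual ArmarX API.
@dataclass
class MemorySnapshot:
    timestamp: float                                           # inherent episodic structure: entries are ordered in time
    modalities: Dict[str, Any] = field(default_factory=dict)   # e.g. {"rgb": ..., "joint_state": ...}
    symbols: List[str] = field(default_factory=list)           # high-level symbolic annotations

class EpisodicMemory:
    def __init__(self):
        self.episodes: List[MemorySnapshot] = []

    def commit(self, snapshot: MemorySnapshot) -> None:
        self.episodes.append(snapshot)

    def query_by_symbol(self, symbol: str) -> List[MemorySnapshot]:
        # hetero-associative access: retrieve multi-modal sensor data from a symbolic cue
        return [s for s in self.episodes if symbol in s.symbols]
```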
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
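As a rough illustration of the split representation described above, the sketch below keeps a primary point cloud fixed and displaces a reflection point cloud with a small per-view neural warp field; the module and its architecture are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch: a primary point cloud for the scene plus a reflection
# point cloud displaced per view by a small neural warp field.
class NeuralWarpField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # point position + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                  # per-point displacement
        )

    def forward(self, reflection_points, view_dir):
        # broadcast the camera viewing direction to every reflection point
        v = view_dir.expand(reflection_points.shape[0], -1)
        return reflection_points + self.mlp(torch.cat([reflection_points, v], dim=-1))

primary_points = torch.rand(10_000, 3)       # static scene geometry
reflection_points = torch.rand(2_000, 3)     # reflections, moved per view
warp = NeuralWarpField()
warped = warp(reflection_points, view_dir=torch.tensor([[0.0, 0.0, 1.0]]))
# both point sets would then be splatted and passed to a neural renderer
```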
Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. They endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks such as movement generation, perception, and language. Understanding how the brain performs such a computation is not only important to advance neuroscience but also to pave the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. An emerging type of hardware that holds promise for efficiently running this type of algorithm is neuromorphic hardware. It emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random access memory (ReRAM) devices stand out in many respects. They permit scalability, are energy efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural simulator NEST. We investigate the effect of different device properties on the performance characteristics of the sequence learning model, and demonstrate resilience with respect to different on-off ratios, conductance resolutions, device variability, and synaptic failure.
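The device properties listed above can be illustrated with a hedged sketch of a ReRAM-like conductance update: a bounded conductance window (on-off ratio), a finite number of levels (conductance resolution), multiplicative update variability, and occasional synaptic failure. Parameter names and values are illustrative only; this is not the NEST model used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ReRAM-like constraints on a weight update (values are made up).
G_MIN, G_MAX = 0.1, 1.0      # on-off ratio of 10
N_LEVELS = 64                # conductance resolution
VARIABILITY = 0.05           # cycle-to-cycle update variability
P_FAIL = 0.01                # probability that an update is skipped

def reram_update(g, dg):
    if rng.random() < P_FAIL:                   # synaptic failure: update is lost
        return g
    dg = dg * rng.normal(1.0, VARIABILITY)      # device/cycle variability
    g = np.clip(g + dg, G_MIN, G_MAX)           # bounded conductance window
    level = np.round((g - G_MIN) / (G_MAX - G_MIN) * (N_LEVELS - 1))
    return G_MIN + level * (G_MAX - G_MIN) / (N_LEVELS - 1)  # quantize to device levels

g = 0.5
for _ in range(100):
    g = reram_update(g, dg=0.02)                # repeated potentiation saturates near G_MAX
print(g)
```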
This paper describes several improvements to a new method for signal decomposition that we recently formulated under the name of Differentiable Dictionary Search (DDS). The fundamental idea of DDS is to exploit a class of powerful deep invertible density estimators called normalizing flows, to model the dictionary in a linear decomposition method such as NMF, effectively creating a bijection between the space of dictionary elements and the associated probability space, allowing a differentiable search through the dictionary space, guided by the estimated densities. As the initial formulation was a proof of concept with some practical limitations, we will present several steps towards making it scalable, hoping to improve both the computational complexity of the method and its signal decomposition capabilities. As a testbed for experimental evaluation, we choose the task of frame-level piano transcription, where the signal is to be decomposed into sources whose activity is attributed to individual piano notes. To highlight the impact of improved non-linear modelling of sources, we compare variants of our method to a linear overcomplete NMF baseline. Experimental results will show that even in the absence of additional constraints, our models produce increasingly sparse and precise decompositions, according to two pertinent evaluation measures.
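For intuition, the following is a minimal sketch of the differentiable-dictionary-search idea: a generator maps latent codes to non-negative atoms, and decomposition is gradient descent over both the latent codes and the mixing activations. A tiny MLP stands in here for the trained normalizing flow, and none of the names correspond to the authors' code.

```python
import torch
import torch.nn as nn

# Minimal DDS-style sketch: search the dictionary space through a differentiable
# generator while fitting non-negative activations to reconstruct the mixture.
class AtomGenerator(nn.Module):
    def __init__(self, latent_dim=8, atom_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, atom_dim))

    def forward(self, z):
        return self.net(z).abs()     # non-negative spectral atoms

def decompose(mixture, generator, n_sources=3, steps=500, lr=1e-2):
    z = torch.randn(n_sources, 8, requires_grad=True)     # searched dictionary codes
    a = torch.rand(n_sources, requires_grad=True)          # source activations
    opt = torch.optim.Adam([z, a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        atoms = generator(z)                                       # (n_sources, atom_dim)
        recon = (a.clamp(min=0).unsqueeze(1) * atoms).sum(dim=0)   # linear mixture of sources
        loss = ((recon - mixture) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach(), a.detach()

gen = AtomGenerator()
mix = torch.rand(64)
codes, activations = decompose(mix, gen)
```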
We introduce a novel way to incorporate prior information into (semi-) supervised non-negative matrix factorization, which we call differentiable dictionary search. It enables general, highly flexible and principled modelling of mixtures where non-linear sources are linearly mixed. We study its behavior on an audio decomposition task, and conduct an extensive, highly controlled study of its modelling capabilities.
In contrast to exploratory analysis of high-dimensional datasets with, e.g., principal component analysis (PCA), neighbor embedding (NE) techniques tend to better preserve the local structure/topology of high-dimensional data. However, the ability to preserve local structure comes at the cost of interpretability: techniques such as t-distributed stochastic neighbor embedding (t-SNE) or uniform manifold approximation and projection (UMAP) do not offer an explanation of the topological (cluster) structure seen in the corresponding embedding. Here, we propose different "tricks" from the field of chemometrics, based on PCA, Q-residuals, and Hotelling's T2 contributions, combined with novel visualization approaches, to derive local and global explanations of neighbor embeddings. We show how our methods can identify discriminative features between groups of data points using standard univariate or multivariate approaches.
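As a sketch of how such contributions can be computed, the snippet below derives per-sample, per-feature Q-residual and Hotelling's T2 contributions from a PCA model (using one common formulation of the T2 contribution); these could then be aggregated over the clusters seen in a t-SNE/UMAP embedding. It is an illustrative approximation, not the exact procedure of the paper.

```python
import numpy as np

# Per-sample, per-feature Q-residual and Hotelling's T2 contributions from PCA.
def pca_contributions(X, n_components=5):
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                 # loadings (features x components)
    T = Xc @ P                              # scores
    resid = Xc - T @ P.T                    # part of X not captured by the model
    q_contrib = resid ** 2                  # Q (residual) contribution per feature
    lam = (S[:n_components] ** 2) / (X.shape[0] - 1)   # component variances
    t2_contrib = (T / lam) @ P.T * Xc       # one common form of the T2 contribution
    return q_contrib, t2_contrib

X = np.random.default_rng(1).normal(size=(100, 20))
q, t2 = pca_contributions(X)
print(q.shape, t2.shape)                    # (100, 20) each: per sample and feature
```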
Changes in tumor volume and tumor characteristics over time are important biomarkers for cancer therapy. In this context, FDG-PET/CT scans are routinely used for staging and restaging of cancer, as the radiolabeled fluorodeoxyglucose is taken up in regions of high metabolism. Unfortunately, these regions of high metabolism are not specific to tumors and can also represent physiological uptake by normal functioning organs, inflammation, or infection, making detailed and reliable tumor segmentation in these scans a demanding task. The autoPET challenge addresses this research gap by providing a public dataset of FDG-PET/CT scans from 900 patients to encourage further improvements in this field. Our contribution to this challenge is an ensemble of two state-of-the-art segmentation models, nnU-Net and Swin UNETR, augmented by a maximum intensity projection classifier that acts like a gating mechanism. If it predicts the presence of lesions, both segmentations are combined by a late-fusion approach. Our solution achieves a Dice score of 72.12% in our cross-validation on patients diagnosed with lung cancer, melanoma, and lymphoma. Code: https://github.com/heiligerl/autopet_submission
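A hedged sketch of the gating and late-fusion logic described above is given below; `nnunet_predict`, `swin_unetr_predict`, and `mip_classifier` are placeholder callables, not the actual submission code.

```python
import numpy as np

# Sketch of the gating + late-fusion idea: a classifier on the PET maximum
# intensity projection decides whether any lesion is present; only then are
# the two segmentation outputs fused.
def predict_lesions(pet_volume, nnunet_predict, swin_unetr_predict, mip_classifier):
    mip = pet_volume.max(axis=0)                    # maximum intensity projection of the PET volume
    if not mip_classifier(mip):                     # gate: no lesion predicted
        return np.zeros(pet_volume.shape, dtype=np.uint8)
    p1 = nnunet_predict(pet_volume)                 # per-voxel lesion probabilities, model 1
    p2 = swin_unetr_predict(pet_volume)             # per-voxel lesion probabilities, model 2
    fused = (p1 + p2) / 2.0                         # late fusion by averaging
    return (fused > 0.5).astype(np.uint8)
```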
Background: Deep learning-based autosegmentation of head and neck lymph node levels (HN_LNL) is of high relevance for radiotherapy research and clinical treatment planning, but remains understudied in the academic literature. Methods: An expert-delineated cohort of 35 planning CTs was used to train an nnU-Net 3D-fullres/2D-ensemble model for autosegmentation of 20 different HN_LNL. Validation was performed on an independent test set (n=20). In a fully blinded evaluation, 3 clinical experts rated the quality of the deep learning autosegmentations in a head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intraobserver variability was compared to deep learning autosegmentation performance. The effect of autocontour consistency with the CT slice plane orientation on geometric accuracy and expert ratings was investigated. Results: Mean blinded expert ratings of deep learning segmentations adjusted to the CT slice plane were significantly better than those of expert-created contours (81.0 vs. 79.6, p<0.001), whereas deep learning segmentations without slice-plane adjustment were rated significantly worse than expert-created contours (77.2 vs. 79.6, p<0.001). The geometric accuracy of deep learning segmentations was indistinguishable from intraobserver variability (mean Dice, 0.78 vs. 0.77, p=0.064), with accuracy varying between levels (p<0.001). The clinical relevance of consistency with the CT slice plane orientation was not captured by geometric accuracy metrics (Dice, 0.78 vs. 0.78, p=0.572). Conclusions: We show that nnU-Net 3D-fullres/2D-ensemble models can be used for highly accurate autosegmentation of HN_LNL using only a limited training dataset, making them ideally suited for large-scale standardized autosegmentation of HN_LNL in research settings. Geometric accuracy metrics are only an imperfect surrogate for blinded expert ratings.
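For reference, the Dice similarity coefficient cited in the results is the standard overlap measure 2|A∩B|/(|A|+|B|) between two binary masks; a minimal implementation:

```python
import numpy as np

# Standard Dice similarity coefficient between two binary segmentation masks.
def dice(mask_a, mask_b, eps=1e-8):
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(round(dice(a, b), 2))   # 0.56
```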
Correlative light and electron microscopy is a powerful tool to study the internal structure of cells. It combines the mutual benefits of correlating light (LM) and electron (EM) microscopy information. However, the classical approach of overlaying LM onto EM images to assign functional to structural information is hampered by the large discrepancy in the structural detail visible in the LM images. This paper aims at investigating an optimized approach, which we call EM-guided deconvolution. It attempts to automatically assign fluorescence-labelled structures to structural details visible in the EM image, in order to bridge the gaps in both resolution and specificity between the two imaging modalities.
Modality selection is an important step when designing multimodal systems, especially in the case of cross-domain activity recognition, as some modalities are more robust to domain shift than others. However, selecting only the modalities that have a positive contribution requires a systematic approach. We tackle this problem by proposing an unsupervised modality selection method (ModSelect) that does not require any ground-truth labels. We determine the correlation between the predictions of multiple unimodal classifiers and the domain discrepancy between their embeddings. Then, we systematically compute modality selection thresholds that retain only the modalities with a high correlation and a low domain discrepancy. We show in our experiments that our method ModSelect chooses only modalities with positive contributions and consistently improves performance on a Synthetic-to-Real domain adaptation benchmark, narrowing the domain gap.
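A simplified sketch of the selection rule follows: a modality is kept only if its unimodal predictions correlate well with those of the other modalities and its source/target embeddings show low discrepancy. The mean-embedding discrepancy and the mean-based thresholds are stand-ins for the paper's exact computation.

```python
import numpy as np

# Simplified ModSelect-style rule: keep modalities with high prediction
# correlation and low source/target embedding discrepancy.
def domain_discrepancy(source_emb, target_emb):
    return np.linalg.norm(source_emb.mean(axis=0) - target_emb.mean(axis=0))

def mod_select(predictions, source_embs, target_embs):
    names = list(predictions)
    # mean pairwise correlation of each modality's predictions with the others
    corr = {
        m: np.mean([np.corrcoef(predictions[m], predictions[o])[0, 1]
                    for o in names if o != m])
        for m in names
    }
    disc = {m: domain_discrepancy(source_embs[m], target_embs[m]) for m in names}
    corr_thr = np.mean(list(corr.values()))     # thresholds derived from the statistics themselves
    disc_thr = np.mean(list(disc.values()))
    return [m for m in names if corr[m] >= corr_thr and disc[m] <= disc_thr]
```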